Despite the potential of health information technology (HIT) systems to significantly reduce medical errors, streamline clinical processes, contain healthcare costs, and ultimately improve the quality of healthcare, their adoption by hospitals in the United States has been rather slow. To study this adoption process and gain insight into the underlying mechanisms, we synthesize theories of social networks and knowledge transfer. We propose a research framework in which the absorptive capacity of a potential adopter and the collective disseminative capacity of connected adopters act as two key determinants of knowledge transfer in a socioeconomic network, and these two capacities substitute for each other in affecting HIT adoption. We also propose that, in a network setting, the mechanism of knowledge transfer manifests quite differently from that of social contagion in its impact on the diffusion process at different stages of adoption. Using a large longitudinal data set covering the adoption decisions of more than five thousand hospitals over a thirteen-year horizon, we find strong support for our hypotheses. Our analysis shows that knowledge flow in provider networks plays a key role in fostering technology diffusion in the initial years, allowing the contagion effect to set in sooner and adoption to proceed more quickly in later years. Therefore, recent efforts at multiple levels to form integrated healthcare delivery networks should accelerate HIT adoption.
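The distinction between an early knowledge-transfer effect and a later contagion effect can be illustrated with a stylized discrete-time diffusion sketch. This is purely an illustration under assumed functional forms and parameter values, not the paper's estimated model: contagion is represented by the usual internal-influence term proportional to the adopter fraction, while knowledge flow from connected adopters is modeled as an extra hazard that matters most before contagion has anything to feed on.

```python
def diffuse(steps, p, q, knowledge=0.0):
    """Stylized Bass-like diffusion path (illustrative assumptions only).

    p         : external-influence hazard
    q * n     : contagion, which grows with the adopter fraction n
    knowledge : assumed extra hazard from knowledge transfer, most
                influential early on when n (and hence q * n) is small
    """
    n = 0.0
    path = []
    for _ in range(steps):
        hazard = p + knowledge + q * n
        n = min(1.0, n + hazard * (1.0 - n))
        path.append(n)
    return path

base = diffuse(20, p=0.01, q=0.4)
with_kt = diffuse(20, p=0.01, q=0.4, knowledge=0.05)
# Knowledge flow lifts early adoption, so the contagion term q * n
# becomes material sooner and later adoption proceeds more quickly.
assert with_kt[2] > base[2] and with_kt[10] > base[10]
```

The point of the sketch is only qualitative: raising the early hazard accelerates the whole trajectory because contagion compounds on the adopter base that knowledge transfer builds first.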
In recent years, we have witnessed an unprecedented growth in the security software market. This market is now fiercely competitive with hundreds of nearly identical products; yet prices are high and coverage is low. Although recent research has examined such idiosyncrasies and found the existence of a negative network effect as a possible explanation, several important questions still remain: (1) What possibly discourages product differentiation in such a competitive market? (2) Why is versioning absent here? (3) How does the presence of free alternatives in this market impact its structure? We develop a comprehensive oligopoly model, with endogenous quality and versioning decisions, to address these issues. Our analyses reveal that, although the presence of numerous competitors leads to a greater need to differentiate, the network effect in this market works as a counterweight, incentivizing vendors to sacrifice differentiation in favor of collocating at the top end of the quality spectrum. We explain the reasons and implications of this important finding. We further show that this result is robust and applicable even when versioning by competing vendors or the presence of free software is taken into consideration. Furthermore, given that the presence of free software actually intensifies competitive pressure and heightens the need to differentiate, the role of the network effect in abating differentiation becomes even more discernible.
We examine how network centrality and closure, two key aspects of network structure, affect technology adoption. In doing so, we consider the content of potential information flows within the network and argue that the impact of network structure on technology adoption can be better understood by separately examining its impact from two groups of alters: current and potential adopters. We contend that increased network centrality and closure among current adopters contribute positively to adoption, whereas the same among potential adopters has exactly the opposite impact. Accordingly, we propose a dynamic view where the fraction of current adopters in the network positively moderates the impact of network centrality and closure. We empirically test the theory by analyzing the adoption of software version control technology by open source software projects. Our results strongly support the theory.
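The proposed moderation effect can be made concrete with a toy logistic specification. Everything below is an illustrative assumption (coefficients, functional form, and variable names are hypothetical, not the paper's estimated model): the marginal effect of centrality is written as a base term plus an interaction with the adopter fraction, so the sign of centrality's effect flips as the network fills with adopters.

```python
import math

def adoption_probability(centrality, adopter_fraction,
                         beta_base=-1.0, beta_interaction=2.5, intercept=-1.0):
    """Toy logistic adoption model (hypothetical coefficients).

    The marginal effect of centrality is
        beta_base + beta_interaction * adopter_fraction,
    negative when few alters have adopted and positive once
    a large enough fraction of the network has adopted.
    """
    x = intercept + (beta_base + beta_interaction * adopter_fraction) * centrality
    return 1.0 / (1.0 + math.exp(-x))

# Early on (10% adopters), higher centrality lowers adoption probability...
assert adoption_probability(0.8, 0.1) < adoption_probability(0.0, 0.1)
# ...later (90% adopters), higher centrality raises it.
assert adoption_probability(0.8, 0.9) > adoption_probability(0.0, 0.9)
```

The interaction term is the whole story here: a single coefficient on centrality could not capture an effect that is harmful among potential adopters and helpful among current ones.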
The usefulness of a software product becomes apparent to consumers only after they experience it, and upon experiencing it they may reach different conclusions regarding its true value. We examine the problem of designing free software trials under a general learning function. Our analyses lead to several new findings. We find that a time-locked trial is optimal only when the rate of learning is sufficiently large. It is not optimal in other situations, even when it has an overall positive effect on consumers' valuations. We also find that positive network effects have a minimal impact on this optimality. Interestingly, we find that neither the optimal trial period nor the optimal price is monotonically increasing in the rate of learning. At moderate rates, the software manufacturer pursues a dual strategy of offering a longer trial as well as a lower price. At higher rates of learning, the manufacturer does the opposite. Our results are robust, and incorporating possibilities such as a trial providing a signal of quality or learning being correlated with prior valuation has little impact on their applicability.
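The role of the learning rate can be illustrated with one common, assumed form of learning: exponential convergence of perceived value toward true value. This sketch is not the paper's general learning function or its optimization; it only shows the basic mechanics of why a faster learner needs a shorter trial to be persuaded, with all parameter values hypothetical.

```python
import math

def perceived_value(v0, v_true, rate, t):
    """Assumed exponential learning: perception moves from the prior
    v0 toward the true value v_true at the given learning rate."""
    return v_true + (v0 - v_true) * math.exp(-rate * t)

def min_trial_length(v0, v_true, rate, price):
    """Shortest trial after which perceived value reaches the price.

    Assumes v0 < price < v_true, i.e., learning during the trial is
    what converts the consumer. Solves
        v_true + (v0 - v_true) * exp(-rate * T) = price   for T.
    """
    return math.log((v_true - v0) / (v_true - price)) / rate

# With a low prior (2), true value 10, and price 8:
slow = min_trial_length(v0=2.0, v_true=10.0, rate=0.2, price=8.0)
fast = min_trial_length(v0=2.0, v_true=10.0, rate=1.0, price=8.0)
assert fast < slow  # faster learning shortens the trial needed to sell
```

Note that the abstract's non-monotonicity result concerns the *optimal* trial and price chosen jointly by the manufacturer; the sketch above fixes the price and only traces the learning curve itself.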
The market for security software has witnessed an unprecedented growth in recent years. A closer examination of this market reveals certain idiosyncrasies that are not observed in a traditional market. For example, it is a highly competitive market with over 80 vendors, yet market coverage is relatively low. Prior research has not attempted to explain what makes this market so different. In this paper, we develop an economic model to find possible answers to this question. Our model uses an existing classification of different types of attacks and models their resulting network effects. We find that the negative network effect from indirect attacks, which is further enhanced by value-based targeted attacks, provides a possible explanation for the unique structure of this market. Overall, our results highlight the unique nature of the security software market, furnish rigorous arguments for several counterintuitive observations in the real world, and provide managerial insights for vendors on market competition.
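How a negative network effect can depress market coverage is easy to see in a stylized demand sketch. This is a hypothetical illustration, not the paper's model: consumers with valuations uniform on [0, 1] buy protection worth `v * quality` at a given price, but incur an assumed disutility `alpha * n` that grows with the fraction `n` of consumers who buy, so equilibrium coverage is a fixed point.

```python
def equilibrium_coverage(price, quality, alpha, iterations=200):
    """Fixed-point market coverage under an assumed negative network effect.

    A consumer with valuation v (uniform on [0, 1]) buys iff
        v * quality - price - alpha * n >= 0,
    where n is the fraction of consumers who buy. Iterating the
    best-response map converges to the equilibrium coverage.
    """
    n = 0.0
    for _ in range(iterations):
        threshold = (price + alpha * n) / quality  # marginal buyer's valuation
        n = max(0.0, 1.0 - min(threshold, 1.0))   # mass of valuations above it
    return n

# Without the network effect, coverage is the usual 1 - price/quality...
assert abs(equilibrium_coverage(0.3, 1.0, alpha=0.0) - 0.7) < 1e-9
# ...a negative network effect (alpha > 0) strictly reduces coverage.
assert equilibrium_coverage(0.3, 1.0, alpha=0.5) < 0.7
```

The sketch shows demand becoming self-limiting: each additional buyer lowers everyone's willingness to pay, which is one mechanism by which coverage can stay low even in a crowded market.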
Outsourcing of software development allows a business to focus on its core competency and take advantage of vendors' technical expertise, economies of scale and scope, and ability to smooth labor demand fluctuations across several clients. However, contracting a software project to an outside developer is often quite challenging because of information asymmetry and incentive divergence. A typical software development contract must deal with a variety of interrelated issues such as the quality of the developed system, the timeliness of delivery, the effort and cost associated with the project, the contract payment, and the postdelivery software support. This paper presents a contract-theoretic model that incorporates these factors to analyze how software outsourcing contracts can be designed. We find that despite their relative inefficiency, fixed-price contracts are often appropriate for simple software projects that require short development time. Time-and-materials contracts work well for more complex projects when the auditing process is efficient and effective. We also examine a type of performance-based contract called quality-level agreement and find that the first-best solution can be reached with such a contract. Finally, we consider profit-sharing contracts that are useful in situations where the developer has more bargaining power.
The uncertainty pervasive in the real world often forces business decisions to be made using uncertain data. The conventional relational model does not have the ability to handle uncertain data. In recent years, several approaches have been proposed in the literature for representing uncertain data by extending the relational model, primarily using probability theory. The aspect of database modification, however, has not been addressed in prior research. It is clear that any modification of existing probabilistic data, based on new information, amounts to the revision of one's belief about real-world objects. In this paper, we examine this aspect of belief revision and develop a generalized algorithm that can be used for the modification of existing data in a probabilistic relational database. The belief revision scheme is shown to be closed, consistent, and complete.
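The idea that modifying probabilistic data amounts to belief revision can be sketched for a single probabilistic attribute. This is a hypothetical, simplified illustration (the function name, representation, and example relation are invented here, and the paper's generalized algorithm covers much more): a discrete distribution over an attribute's possible values is revised by Bayesian conditioning on new evidence and renormalized.

```python
def revise(distribution, likelihood):
    """Revise a discrete distribution over an attribute's possible
    values given new evidence, expressed as a per-value likelihood
    (Bayesian conditioning). Illustrative sketch only.

    Closure: the result is again a distribution over the same domain.
    Consistency: values the evidence rules out get probability 0.
    """
    posterior = {v: p * likelihood.get(v, 0.0) for v, p in distribution.items()}
    total = sum(posterior.values())
    if total == 0.0:
        raise ValueError("evidence contradicts all existing beliefs")
    return {v: p / total for v, p in posterior.items()}

# Hypothetical tuple: belief about which supplier provided a part,
# revised after learning the supplier was not "s3".
prior = {"s1": 0.5, "s2": 0.3, "s3": 0.2}
posterior = revise(prior, {"s1": 1.0, "s2": 1.0, "s3": 0.0})
assert abs(posterior["s1"] - 0.625) < 1e-9  # 0.5 / (0.5 + 0.3)
```

Renormalization is what keeps the scheme closed: after every modification the stored beliefs still form a valid probability distribution, so revised tuples remain legal rows of the probabilistic relation.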